We use three different shapefiles: the continental U.S. land mass, the state waters of Maine, New Hampshire, Massachusetts, Connecticut, Rhode Island, New York, New Jersey, Delaware, Maryland, Virginia, and North Carolina, and the Northeast EEZ. In addition, we use the NOAA dataset of landings per port.
Covers the US land territory for visualization. Data provided by the maps package.
States <- c("maine", "new hampshire", "massachusetts", "connecticut", "rhode island", "new york", "new jersey", "delaware", "maryland", "virginia", "north carolina")
# -------------- #
# US State Map (land)
# -------------- #
land_sf <- st_as_sf(map("state", plot = FALSE, fill = TRUE)) %>%
filter(ID %in% States) %>%
mutate(abrev = c("CT","DE","ME","MD","MA","NH","NJ","NY","NC","RI","VA"),
state = str_to_sentence(ID)
)
# Visual exploration (o.k! )
ggplot(land_sf) +
geom_sf(aes(fill = state)) +
scale_fill_manual(values = state_pallet) +
theme_dark()
We used the Sea Around Us shapefile updated to June 2016.
# Load it!
eez_sf <- st_read(path_world) %>%
rename(eez_name = Name) %>%
st_transform(crs = 4326) %>% # 4326
filter(eez_name == "USA (East Coast)") %>%
st_simplify(preserveTopology = TRUE, dTolerance = 0.1) #0.1 for paper
Reading layer `SAUEEZ_July2015' from data source
`/Volumes/Enterprise/Data/Spatial/SAU/SAU_Shapefile/SAUEEZ_July2015.shp'
using driver `ESRI Shapefile'
Simple feature collection with 280 features and 7 fields
Geometry type: MULTIPOLYGON
Dimension: XY
Bounding box: xmin: -180 ymin: -63.66443 xmax: 180 ymax: 87.02394
Geodetic CRS: WGS 84
Warning in st_simplify.sfc(st_geometry(x), preserveTopology, dTolerance) :
st_simplify does not correctly simplify longitude/latitude data, dTolerance needs to be in decimal degrees
# Visual exploration (o.k! )
ggplot(eez_sf) +
geom_sf() +
theme_dark()
Covers the state waters of the NE US states. Data from data.gov.
License: No license information was provided. If this work was prepared by an officer or employee of the United States government as part of that person’s official duties it is considered a U.S. Government Work.
We used Global Fishing Watch’s database on Anchorages, Ports and Voyages.
The Global Fishing Watch anchorages dataset is a global database of anchorage locations where vessels congregate. The dataset contains over 160,000 individual anchorage locations, which are further grouped into nearly 32,000 ports when applicable.
So… it looks like some ports are “far from shore”, which I don’t fully understand. For now, I am only selecting those ports that are 0 m from shore.
There are still too many ports in each state, especially New Jersey! One approach is to use NOAA’s landings-per-port data to filter them out…
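The filtering idea above can be sketched as follows. This is a minimal sketch with assumed object and column names: `anchorages_sf` stands in for the loaded GFW anchorages layer and `noaa_landings` for the NOAA landings table, and a shared `port` column is assumed for the join.

```r
library(dplyr)

# Keep only anchorages on the shoreline whose port also appears
# in the NOAA landings data (i.e., ports with reported landings)
active_ports_sf <- anchorages_sf %>%
  filter(distance_from_shore_m == 0) %>%        # drop "far from shore" ports
  semi_join(noaa_landings %>% distinct(port),   # ports with landings only
            by = "port")
```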
This dataset includes all landings from 2015-July 2021. It includes the species landed, the port, and the Hull ID and vessel name for black sea bass and summer flounder.
License, from Sarah Smith: This is confidential data, so please do not share this file with others. You may need to contact NOAA directly just to request permission. My contact has been Alison Ferguson (Alison.Ferguson@noaa.gov) in their data department.
This approach removes a lot of ports, to the point that Delaware is completely port-less.
In this approach we only use the ports categorized within the WPI, with distance_from_shore_m == 0 and at_dock == TRUE.
at_dock: anchorages within a 450 m buffer around a combined land shapefile are considered at dock. This approach might be better suited. It also has the advantage that we are using the WPI.
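A minimal sketch of this WPI-based selection, assuming the column names mentioned in the text (`distance_from_shore_m`, `at_dock`) plus a hypothetical `label_source` flag identifying WPI-categorized ports:

```r
library(dplyr)

wpi_ports_sf <- anchorages_sf %>%
  filter(label_source == "wpi",        # only ports categorized within the WPI
         distance_from_shore_m == 0,   # on the shoreline
         at_dock == TRUE)              # within the 450 m land buffer
```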
This is how all the shapefiles look together.
As a first step we need to divide the NE US EEZ among the different states. For that, we expanded a buffer polygon out to 200 nautical miles and then estimated the percentage that each expanded polygon occupied. For the initial buffer we used two approaches: state waters and US ports. Note that in all cases these areas overlapped and the percentages account for it. We did this by following these steps:
We set a buffer of 410000 m (410 km, ~ 221 nm) that overshoots the EEZ a bit, but is eventually cropped
We set a buffer of 5 deg (510 km, ~ 221 nm) that overshoots the EEZ a bit, but is eventually cropped
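The two buffers can be sketched as below. Object names (`state_waters_sf`, `ports_sf`) and the projected CRS (EPSG:32618, UTM zone 18N, chosen here for the NE US shelf) are assumptions, not the script's actual choices.

```r
library(sf)
library(dplyr)

# State-waters approach: 410 km buffer, computed in a projected CRS
buffer_sw <- state_waters_sf %>%
  st_transform(32618) %>%
  st_buffer(dist = 410000) %>%   # 410 km ~ 221 nm, overshoots the EEZ a bit
  st_transform(4326)

# Fishing-ports approach: 5 degree buffer on unprojected points
buffer_fp <- st_buffer(ports_sf, dist = 5)   # ~510 km at these latitudes
```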
Here we expand a grid within the buffer so we can estimate the proportion of each state.
Note: Dark grey shaded area is the grid and is the same for both approaches
Once we have a gridded area, we converted the grid to a sf so we can merge it with the buffered states and ports and finally filter out everything outside the states polygon
Finally, we crop the grided buffers to within the EEZ to capture the actual water space.
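The grid-to-sf-to-crop chain described above can be sketched like this; `buffer_sw` and the 0.1-degree cell size are assumptions standing in for the actual buffer layer and resolution.

```r
library(sf)
library(dplyr)

# Expand a grid over the buffered area and convert it to sf
grid_sf <- st_make_grid(buffer_sw, cellsize = 0.1, what = "polygons") %>%
  st_as_sf() %>%
  mutate(index = row_number())

# Merge with the buffered states, drop cells outside them,
# then crop to the EEZ (slow with the full EEZ shapefile)
grid_eez_sf <- grid_sf %>%
  st_join(buffer_sw, left = FALSE) %>%
  st_intersection(eez_sf)
```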
Note: This step takes quite a while because of the size of the EEZ shapefile. No, you cannot use st_simplify() here
From the plots alone it is hard to tell that the number of grid cells each state gets differs slightly between approaches. This comes to light when we count the number of grid cells each state gets under each approach: the state-waters count is substantially higher.
We can visualize the difference in cropped area for each state next. Again, the fishing-port approach results in less area for all states, but the loss is not proportional among states. For example, Delaware loses roughly 35% of its fishing grounds under this approach.
grid_eez_sw_df %>%
mutate(approach = "state_waters") %>%
bind_rows(grid_eez_fp_df) %>%
mutate(approach = ifelse(is.na(approach),"fishing_port",approach)) %>%
group_by(state,approach) %>%
summarise(n_grid = n()) %>%
spread(approach,n_grid) %>%
mutate(difference = state_waters - fishing_port,
percentage_sw = round((difference / state_waters) * 100)
)
`summarise()` has grouped output by 'state'. You can override using the `.groups` argument.
And here is the map visualization of the cropped difference
#### Data needed ####
# Get state order from North to south
state_order <- my_path("R","Partial/", name = "grid_eez_fp_df.csv",read = T) %>%
group_by(state) %>%
summarise(
order = mean(lat)
) %>%
mutate(abrev = c("CT","DE","ME","MD","MA","NH","NJ","NY","NC","RI","VA")
) %>%
arrange(desc(order)) %>%
mutate(order= c("b","a",letters[3:11])) # weirdly maine goes below
# Load grid of state waters
grid_eez_sw_sf <- st_read(my_path("D","Spatial/grid_eez_sw_sf", name = "grid_eez_sf.shp")) %>%
select(index,state,geometry) %>%
mutate(method = "state_waters",
state = str_to_sentence(state)) %>%
left_join(state_order)
grid_eez_sw_sf$group <- factor(grid_eez_sw_sf$abrev, # Reordering group factor levels
levels = state_order$abrev)
# Load grid of fishing ports
grid_eez_fp_sf <- st_read(my_path("D","Spatial/grid_eez_fp_sf", name = "grid_eez_fp.shp")) %>%
select(index,state = jursdct,geometry) %>%
mutate(method = "fishing_port",
state = str_to_sentence(state))%>%
left_join(state_order)
grid_eez_fp_sf$group <- factor(grid_eez_fp_sf$abrev, # Reordering group factor levels
levels = state_order$abrev)
# Plotting
# Make sure st_simplify() is run for eez sf
# State waters plot
p_sw <- gridExtra::grid.arrange(
# Overall (overlapping) position
ggplot(grid_eez_sw_sf) +
geom_sf(data = eez_sf, aes(), fill = "white") +
geom_sf(aes(color =abrev , fill = abrev), alpha = 0.3) +
geom_sf(data = land_sf, aes()) +
ggsflabel::geom_sf_label_repel(data = land_sf, aes(label= abrev),
size = 4,
box.padding = 0.10,
hjust = 1) +
scale_color_manual(values = state_pallet) +
scale_fill_manual(values = state_pallet) +
my_ggtheme_p(leg_pos = "",
ax_tx_s = 13) +
coord_sf(ylim = c(30,48)) +
scale_y_continuous(breaks = c(30,35,40,45))+
labs(x = "", y = "", title = "A) State waters extrapolation") +
theme(plot.title = element_text(size = 20)),
# Showing each state separately
ggplot(grid_eez_sw_sf) +
geom_sf(data = land_sf, aes(), fill = "grey80") +
geom_sf(aes(color = abrev),size = 0.1, alpha = 0.3) +
facet_wrap(~ group) +
theme(legend.position = "top") +
scale_color_manual(values = state_pallet,
# labels = unique(grid_eez_sw_sf$group)) +
labels = grid_eez_sw_sf %>% arrange(order) %>% pull(state) %>% unique()) +
ggtitle("") +
my_ggtheme_p(leg_pos = "",
ax_tx_s = 11,
axx_tx_ang = 45,
hjust = 1
),
nrow = 1)
##### ------------- ####
# Fishing Ports plot
##### ------------- ####
p_fp <- gridExtra::grid.arrange(
# Overall (overlapping) position
ggplot(grid_eez_fp_sf) +
geom_sf(data = eez_sf, aes(), fill = "white") +
geom_sf(aes(color = abrev, fill = abrev), alpha = 0.3) +
geom_sf(data = land_sf, aes()) +
ggsflabel::geom_sf_label_repel(data = land_sf, aes(label= abrev),
size = 4,
box.padding = 0.10,
hjust = 1) +
scale_color_manual(values = state_pallet) +
scale_fill_manual(values = state_pallet) +
my_ggtheme_p(leg_pos = "",
ax_tx_s = 13) +
coord_sf(ylim = c(30,48)) +
scale_y_continuous(breaks = c(30,35,40,45)) +
labs(x = "", y = "", title = "B) Fishing ports extrapolation") +
theme(plot.title = element_text(size = 20)),
# Showing each state separately
ggplot(grid_eez_fp_sf) +
geom_sf(data = land_sf, aes(), fill = "grey80") +
geom_sf(aes(color = abrev),size = 0.1, alpha = 0.3) +
facet_wrap(~ group) +
theme(legend.position = "top") +
scale_color_manual(values = state_pallet,
# labels = unique(grid_eez_fp_sf$group)) +
labels = grid_eez_fp_sf %>% arrange(order) %>% pull(state) %>% unique()) +
ggtitle("") +
my_ggtheme_p(leg_pos = "",
ax_tx_s = 11,
axx_tx_ang = 45,
hjust = 1
),
nrow = 1)
combined_plot <- gridExtra::grid.arrange(p_sw,p_fp,
bottom = "Longitude",
left = "Latitude")
ggsave(filename = "buffer_figure_b.jpg",
plot = combined_plot,
path = my_path("R","Figures"),
width = 11,
height = 11
)
Buffer figure for paper
In this step we interpolate the survey data within the previously created grid following a Triangular Irregular Surface method.
We need to create a couple of functions to run the whole process.
This is the main function used to interpolate the data per year. It follows a Triangular Irregular Surface method using the interp::interp() function. If you want to see the function, click on Code.
tis <- function(input_data, grid, yr, taxa, reg){
# --------------- #
# Testing
# print(paste(yr))
# yr = 1976
# --------------- #
# Filter data
data <- input_data %>%
filter(year == yr,
spp == taxa,
region == reg
) %>%
# Average duplicated hauls in the same spot
group_by(region,year,spp,lat,lon) %>%
summarise_at(vars(wtcpue),
mean,na.rm = T)
# Only interpolate cases where there are more than 3 rows
# Marked by the function
if(nrow(data) <= 3){
fit_tin <- tibble()
}else{
# Triangular Irregular Surface
fit_tin <- interp::interp( # using {interp}
x = data$lon, # the function actually accepts coordinate vectors
y = data$lat,
z = data$wtcpue,
xo = grid$lon, # here we already define the target grid
yo = grid$lat,
output = "points"
) %>%
bind_cols() %>%
bind_cols(grid) %>%
mutate(year = yr,
region = reg,
spp = taxa) %>%
select(index, state, year, region, spp, lon=x, lat=y, value = z)
}
return(fit_tin)
}
# Test me
# Needs variables in Control panel
# Test no data: "Alosa aestivalis", reg = "Northeast US Fall", yr = 1974
# tis(input_data = ocean_data, grid = grid_eez_df, taxa = "Illex illecebrosus", reg = "Northeast US Fall", yr = 1973)
# lapply(years,tis,input_data = ocean_data, grid = grid_eez_df, taxa = "Illex illecebrosus", reg = regions[2])
This is a sub-function that runs the tis() function by taxa and region. It saves the output as a .csv file for each species.
run_tis <- function(input_data, grid, years, taxa, reg, method){
# Run tis for species and surveys
for(r in 1:2){
partial_df <- bind_rows(
lapply(years,tis,input_data = input_data, grid = grid, taxa = taxa, reg = reg[r])
)
if(r == 1){
historic_tif <- partial_df
}else{
historic_tif <- bind_rows(historic_tif,partial_df)
}
}
# ----------------------- #
# Save dataset per species
# ----------------------- #
# Set file name
name <- paste0("tif_",str_replace(taxa," ","_"),".csv")
complement <- paste0("Partial/Interpolation/",method)
# Set path name
save_path <- my_path("R",complement)
# Set complete path
save_name <- paste0(save_path,name)
# Create folder if it does not exist
if(file.exists(save_path) == F){
dir.create(save_path)
}
# Save file
write_csv(historic_tif,save_name)
return_msg <- paste("Interpolation done for", taxa)
return(return_msg)
}
# Test me
# run_tis(input_data = ocean_data, grid = grid_eez_fp_df, taxa = "Centropristis striata", years = seq(1970,1980), reg = regions, method = "testa")
The interpolation was done with NOAA Northeast Fisheries Science Center Spring and Fall Bottom Trawl Survey data provided by OceanAdapt. Data was accessed through GitHub.
In primary publications using data from the database, please cite Pinsky et al. 2013. Marine taxa track local climate velocities. Science 341: 1239-1242 doi: 10.1126/science.1239352, as well as the original data sources.
This is just a sub-step to split up the data into single-species files. This makes the app faster, as it only needs to load species-specific data rather than all the data at the beginning.
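A minimal sketch of that split; `ocean_data` is the combined survey table, and the output folder name is a hypothetical placeholder following the my_path() convention used elsewhere in this document.

```r
library(dplyr)
library(readr)
library(purrr)
library(stringr)

# Write one .csv per species so the app can load them individually
ocean_data %>%
  group_split(spp) %>%
  walk(function(df) {
    file_name <- paste0(str_replace(unique(df$spp), " ", "_"), ".csv")
    write_csv(df, file.path(my_path("R", "Partial/Species"), file_name))
  })
```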
This is where we load the required data and prepare to run the interpolation function. Note that some of the data here has been previously created
# Missing runs
spp <- tibble(taxa=spp) %>%
anti_join(spp_runs) %>%
pull(taxa)
Joining, by = "taxa"
Here we just run the routine for each of the species present in the Northeast US Fall and Spring surveys between 1973 and 2019.
# Run them all in parallel
# mclapply() lives in the parallel package (forking; Unix-alikes only)
library(parallel)
mclapply(spp,
run_tis,
input_data = ocean_data,
grid = grid_eez_fp_df,
years = years,
reg = regions,
method = "fp"
)
Results are now for ten species managed under Mid-Atlantic Council Management Plans according to NOAA.
Scomber scombrus (Atlantic mackerel), Peprilus triacanthus (butterfish), Illex illecebrosus (shortfin squid), Paralichthys dentatus (summer flounder), Stenotomus chrysops (scup), Centropristis striata (black sea bass), Pomatomus saltatrix (bluefish), Lopholatilus chamaeleonticeps (golden tilefish), Caulolatilus microps (blueline tilefish), and Clupea harengus (Atlantic herring)
state_lat <- historic_spp %>%
group_by(state) %>%
summarise(
order = mean(lat)
) %>%
mutate(abrev = c("CT","DE","ME","MD","MA","NH","NJ","NY","NC","RI","VA")
) %>%
arrange(desc(order)) %>%
mutate(order= letters[1:11])
Error in group_by(., state) : object 'historic_spp' not found
This map shows the aggregated extrapolated value from all three species, averaged across the whole study period within each State’s waters.
Note: This is intended to be a supplemental figure
data_grid <- historic_spp %>%
group_by(state,lat,lon,region,year) %>%
summarise(sum = sum(value,na.rm= T), .groups = "drop") %>% # sum of all species
group_by(state,lat,lon,region) %>%
summarise(mean = mean(sum,na.rm= T), .groups = "drop") # average of all years
ggplot(data = subset(data_grid, !is.na(mean))) +
geom_tile(
aes(
x = lon,
y = lat,
fill = mean,
col = mean
)
) +
geom_sf(data = land_sf, aes()) +
facet_wrap(~ state + region,
ncol = 4) +
labs(x = "Longitude", y = "Latitude") +
scale_fill_viridis("Mean Interpolation", alpha = 0.8) +
scale_color_viridis("Mean Interpolation", alpha = 0.8) +
theme_bw()
Here we show the average proportion of the interpolation by State and time period. Time periods were arbitrarily defined as:
Notes: The figure represents the Spring survey. This computation accounts for the overlapping of state waters.
total_fited <- historic_spp %>%
group_by(year,region) %>%
summarise(total_value = sum(value,na.rm=T),.groups = "drop")
state_fit <- historic_spp %>%
group_by(state,year,region) %>%
summarise(state_value = sum(value,na.rm= T), .groups = "drop") %>%
left_join(total_fited,
by = c("year","region")) %>%
mutate(percentage = state_value/total_value*100) %>%
left_join(periods,
by = "year") %>%
group_by(state,order,label,region) %>%
summarise(mean_per = round(mean(percentage)),.groups = "drop")
# check percentages
# state_fit %>%
# group_by(period) %>%
# summarise(sum(mean_per))
p <- land_sf %>%
left_join(state_fit,
by = "state") %>%
ggplot() +
# geom_sf(aes(fill = mean_per)) +
geom_sf(aes(fill = mean_per, text = paste(state, mean_per,"% of proportion"))) +
viridis::scale_fill_viridis("Average proportion per State", alpha = 0.8) +
facet_wrap(~ region + label,
nrow = 2) +
labs(x = "Latitude",
y = "Longitude") +
my_ggtheme_p(facet_tx_s = 12,
leg_pos = "bottom") +
theme(legend.key.width = unit(1,"line")) +
theme_dark()
ggplotly(p,
tooltip = "text",
dynamicTicks = FALSE) %>%
style(hoverlabel = list(bgcolor = "white"), hoveron = "fill")
total_fited <- historic_spp %>%
group_by(year,region,spp) %>%
summarise(total_value = sum(value,na.rm=T),.groups = "drop")
state_fit <- historic_spp %>%
group_by(state,year,region,spp) %>%
summarise(state_value = sum(value,na.rm= T), .groups = "drop") %>%
left_join(total_fited,
by = c("year","region","spp")) %>%
mutate(percentage = state_value/total_value*100) %>%
left_join(periods,
by = "year") %>%
group_by(state,order,label,region,spp) %>%
summarise(mean_per = round(mean(percentage)),.groups = "drop") %>%
#Only show results for spring
filter(str_detect(region,"Spring")) %>%
mutate(spp = gsub(" ","\n",spp))
# The plot
map_plot <- land_sf %>%
left_join(state_fit,
by = "state") %>%
ggplot() +
geom_sf(aes(fill = mean_per)) +
viridis::scale_fill_viridis("Average proportion per State", alpha = 0.8) +
facet_grid(spp ~ label) +
labs(x = "Longitude",
y = "Latitude") +
my_ggtheme_p(facet_tx_s = 20,
leg_pos = "bottom",
axx_tx_ang = 45,
ax_tx_s = 12,
ax_tl_s = 18,
hjust = 1) +
theme(legend.key.width = unit(4,"line"))
ggsave(filename = "proportion_chg_spp.jpg",
plot = map_plot,
path = my_path("R","Figures"),
width = 14,
height = 14
)
Proportion change per species map for paper
total_fited <- historic_spp %>%
group_by(year,region) %>%
summarise(total_value = sum(value,na.rm=T),.groups = "drop")
state_fit <- historic_spp %>%
group_by(state,year,region) %>%
summarise(state_value = sum(value,na.rm= T), .groups = "drop") %>%
left_join(total_fited,
by = c("year","region")) %>%
mutate(percentage = state_value/total_value*100) %>%
left_join(periods,
by = "year") %>%
group_by(state,order,label,region) %>%
summarise(mean_per = round(mean(percentage)),.groups = "drop") %>%
filter(str_detect(region,"Spring"))
agg_map <- land_sf %>%
left_join(state_fit,
by = "state") %>%
ggplot() +
geom_sf(aes(fill = mean_per)) +
viridis::scale_fill_viridis("Average proportion per State", alpha = 0.8) +
facet_wrap(~ label, nrow = 1) +
labs(x = "Longitude",
y = "Latitude") +
my_ggtheme_p(facet_tx_s = 16,
leg_pos = "bottom",
leg_tl_s = 14,
ax_tx_s = 10,
ax_tl_s = 14) +
theme(legend.key.width = unit(2,"line"))
ggsave(filename = "proportion_chg_agg.jpg",
plot = agg_map,
path = my_path("R","Figures"),
width = 9,
height = 5
)
Proportion change aggregated all species map for paper
This graph shows the proportion of the interpolation value each State has over time.
Note: This plot is interactive. For easier comparison between States you can:
total_fited <- historic_spp %>%
group_by(year,region) %>%
summarise(total_value = sum(value,na.rm=T),.groups = "drop")
# group by state
state_fit <- historic_spp %>%
group_by(state,year,region) %>%
summarise(state_value = sum(value,na.rm= T), .groups = "drop") %>%
left_join(total_fited,
by = c("year","region")) %>%
mutate(percentage = state_value/total_value*100)
# Plot me
p <- ggplot(state_fit) +
geom_area(
aes(
x = year,
y = round(percentage),
fill = state
)
) +
ylab("Percentage (%)") +
# viridis::scale_fill_viridis(discrete = T, alpha = 0.8) +
scale_fill_manual(values = state_pallet) +
MyFunctions::my_ggtheme_p() +
facet_wrap(~region, ncol = 1) +
theme_dark()
ggplotly(p,
dynamicTicks = TRUE) %>%
layout(hovermode = "x") %>%
rangeslider()
This graph shows the proportion of each State smoothed with a **10-year running mean**. It makes increasing and decreasing trends easier to see.
Note: This plot is also interactive and thus offers the same options as the previous one.
# group by state
state_fit <- historic_spp %>%
group_by(state,year,region) %>%
summarise(state_value = sum(value,na.rm= T), .groups = "drop") %>%
left_join(total_fited,
by = c("year","region")) %>%
mutate(percentage = state_value/total_value*100) %>%
group_by(state,region) %>%
mutate(RMean = zoo::rollmean(x = percentage,
10,
fill = NA,
na.rm=T)
) %>%
left_join(state_lat)
# Plot me
p <- ggplot(state_fit) +
geom_area(
aes(
x = year,
y = round(RMean),
fill = state
# fill = order
)
) +
ylab("10 yrs running average (%)") +
scale_fill_manual(
"State",
values = state_pallet,
# labels = state_fit %>% arrange(order) %>% pull(state) %>% unique()
) +
MyFunctions::my_ggtheme_p() +
facet_wrap(~region, ncol = 1) +
theme_dark()
suppressWarnings(
ggplotly(p,
dynamicTicks = TRUE) %>%
layout(hovermode = "x") %>%
# add_trace() %>%
rangeslider()
)
This section of the results explores the differences between the historic distribution of the stock and the historic quota allocation. We collected the proportion of the total quota that each State gets for each species.
Right now the analysis only covers black sea bass, summer flounder, and scup. But we can go species by species to see which ones have their quota allocated by state and include them in the analysis.
Note: We are using the new allocations based on the stock’s most recent biomass distribution.
Scup Summer period
For State-level managed species, see the Atlantic States Marine Fisheries Commission website and go species by species.
quota_allocation <- state_lat %>%
mutate(
"Centropristis striata" = c(
0.0040, #ME
0.0040, #NH
0.1564, #MA
0.1323, #RI
0.0367, #CT
0.0857, #NY
0.2010, #NJ
0.0411, #DE
0.0888, #MD
0.1614, #VA
0.0888 #NC
),
"Paralichthys dentatus" = c(
0.0004756, #ME
0.0000046, #NH
0.0682046, #MA
0.1568298, #RI
0.0225708, #CT
0.0764699, #NY
0.1672499, #NJ
0.0001779, #DE
0.0203910, #MD
0.2131676, #VA
0.2744584 #NC
),
"Stenotomus chrysops" = c(
0.0012101, #ME
0.0000000, #NH
0.2158729, #MA
0.5619456, #RI
0.0315399, #CT
0.1582466, #NY
0.0291667, #NJ
0.0000000, #DE
0.0001190, #MD
0.0016502, #VA
0.0002490 #NC
)
) %>%
gather("spp","quota",4:6)
# Double check they all add to 1...
quota_allocation %>%
group_by(spp) %>%
summarise_at(vars(quota),sum)
quota_allocation %>%
arrange(order) %>%
select(-order) %>%
spread(spp,quota) %>%
kable()
total_fited <- historic_spp %>%
group_by(year,region,spp) %>%
summarise(total_value = sum(value,na.rm=T),.groups = "drop")
# group by state
state_fit <- historic_spp %>%
group_by(state,year,region,spp) %>%
summarise(state_value = sum(value,na.rm= T), .groups = "drop") %>%
left_join(total_fited,
by = c("year","region","spp")) %>%
mutate(percentage = state_value/total_value*100) %>%
#Only show results for spring
filter(str_detect(region,"Spring"))
quota_df <- quota_allocation %>%
left_join(historic_spp) %>%
group_by(state,abrev,year,region,spp,quota) %>%
summarise(state_value = sum(value,na.rm= T), .groups = "drop") %>%
left_join(total_fited,
by = c("year","region","spp")) %>%
mutate(Distribution = state_value/total_value*100,
Historic = quota*100) %>%
group_by(state,quota,spp) %>%
mutate(RMean = zoo::rollmean(x = Distribution,
10,
fill = NA,
na.rm=T)
) %>%
#Only show results for spring
filter(str_detect(region,"Spring"),
!is.na(RMean) # Remove NAs from rollmean
) %>%
left_join(state_lat) %>%
filter(
# spp =="Centropristis striata"
spp %in% quota_allocation$spp
)
head(quota_df) %>%
kable()
This version shows the difference between the Historic quota allocation and the distribution proportion of the stock:
# Option one
quota_df %>%
mutate(diff = Historic- Distribution,
diff_rmean = RMean- Distribution) %>%
gather("type","value",diff:diff_rmean) %>%
# View()
ggplot() +
geom_line(
aes(
x = year,
y = value,
color = order
)
) +
scale_color_manual("State",
values = state_pallet,
labels = quota_df %>% arrange(order) %>% pull(state) %>% unique()
) +
facet_grid(type~spp,
scales = "free") +
theme_dark()
Here, we plot the distributional quota (solid lines) together with each State’s allocation (dashed lines).
# Option one
quota_df %>%
gather("level","quota",Distribution:RMean) %>%
# mutate(diff = quota_per- percentage) %>%
# View()
ggplot() +
geom_line(
aes(
x = year,
y = quota,
color = order
)
) +
labs(x = "Year",y = "Quota (%)") +
scale_fill_manual("State",
values = state_pallet,
labels = quota_df %>% arrange(order) %>% pull(state) %>% unique()
) +
scale_color_manual("State",
values = state_pallet,
labels = quota_df %>% arrange(order) %>% pull(state) %>% unique()
) +
scale_linetype("Quota") +
facet_grid(spp ~ level)+
theme_dark()
The idea here is to create an efficiency index (Danger!) that computes the alignment between the distribution and the actual allocation. The index is simply:
\[ei = \frac{HistoricQuota}{DistributionProportion}\] So, when ei = 1 the quota is aligned with the distribution; when ei > 1 the historic quota exceeds the stock’s distribution in that State; conversely, ei < 1 means that the historic quota is less than what the State currently has.
Note: - There are some crazy outliers that have been removed, for now… - The bottom plot represents the index as a running mean.
# Option one
eff_index <- quota_df %>%
# filter(
# spp =="Centropristis striata"
# ) %>%
mutate(index = Distribution/Historic,
index_rmean = RMean/Historic) %>%
gather("level","index",index:index_rmean) %>%
filter(index < 5) # there are some craaaazy outliers
# View()
ggplot(eff_index) +
geom_line(
aes(
x = year,
y = index,
color = state
),
) +
labs(x = "Year",y = "Efficiency index") +
scale_fill_manual("State",
values = state_pallet,
labels = quota_df %>% arrange(order) %>% pull(state) %>% unique()
) +
scale_color_manual("State",
values = state_pallet,
labels = quota_df %>% arrange(order) %>% pull(state) %>% unique()
) +
facet_grid(level~spp,
scales = "free")+
theme_dark()